
    Relaxed Gaussian process interpolation: a goal-oriented approach to Bayesian optimization

    This work presents a new procedure for obtaining predictive distributions in the context of Gaussian process (GP) modeling, with a relaxation of the interpolation constraints outside some ranges of interest: the mean of the predictive distribution no longer necessarily interpolates the observed values when they are outside the ranges of interest, but is simply constrained to remain outside them. This method, called relaxed Gaussian process (reGP) interpolation, provides better predictive distributions in ranges of interest, especially in cases where a stationarity assumption for the GP model is not appropriate. It can be viewed as a goal-oriented method and becomes particularly interesting in Bayesian optimization, for example for the minimization of an objective function, where good predictive distributions for low function values are important. When the expected improvement criterion and reGP are used for sequentially choosing evaluation points, the convergence of the resulting optimization algorithm is theoretically guaranteed, provided that the function to be optimized lies in the reproducing kernel Hilbert space attached to the known covariance of the underlying Gaussian process. Experiments indicate that using reGP instead of stationary GP models in Bayesian optimization is beneficial.
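    The expected improvement criterion mentioned above has a well-known closed form under a Gaussian predictive distribution. A minimal sketch of plain EI for minimization (not the reGP construction itself, which modifies the predictive distribution feeding into it):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, f_best):
    """Closed-form expected improvement for minimization.

    mu, sigma : posterior mean and standard deviation of the (re)GP
                predictive distribution at the candidate points.
    f_best    : lowest objective value observed so far.
    """
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (f_best - mu) / sigma
    return (f_best - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```

    The next evaluation point is chosen by maximizing this quantity over the search domain; reGP changes only the predictive distribution supplying `mu` and `sigma`.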

    Bayesian optimization for materials design

    We introduce Bayesian optimization, a technique developed for optimizing time-consuming engineering simulations and for fitting machine learning models on large datasets. Bayesian optimization guides the choice of experiments during materials design and discovery to find good material designs in as few experiments as possible. We focus on the case where material designs are parameterized by a low-dimensional vector. Bayesian optimization is built on a statistical technique called Gaussian process regression, which allows the performance of a new design to be predicted from previously tested designs. After providing a detailed introduction to Gaussian process regression, we introduce two Bayesian optimization methods: expected improvement, for design problems with noise-free evaluations, and the knowledge-gradient method, which generalizes expected improvement and may be used in design problems with noisy evaluations. Both methods are derived using a value-of-information analysis and enjoy one-step Bayes-optimality.
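    As a rough sketch of the Gaussian process regression that underpins these methods, the posterior of a zero-mean GP with a squared-exponential kernel can be computed as follows (hyperparameters are fixed here rather than fitted, which a real implementation would do):

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential covariance between two sets of design vectors."""
    d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2.0 * A @ B.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_new, noise=1e-6):
    """Posterior mean and variance of a zero-mean GP at new designs."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_new)
    mean = K_s.T @ np.linalg.solve(K, y_train)
    v = np.linalg.solve(K, K_s)
    var = rbf_kernel(X_new, X_new).diagonal() - np.sum(K_s * v, axis=0)
    return mean, np.maximum(var, 0.0)
```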

    Sequential design of computer experiments for the estimation of a probability of failure

    This paper deals with the problem of estimating the volume of the excursion set of a function $f:\mathbb{R}^d \to \mathbb{R}$ above a given threshold, under a probability measure on $\mathbb{R}^d$ that is assumed to be known. In the industrial world, this corresponds to the problem of estimating the probability of failure of a system. When only an expensive-to-simulate model of the system is available, the budget for simulations is usually severely limited, and classical Monte Carlo methods ought therefore to be avoided. One of the main contributions of this article is to derive SUR (stepwise uncertainty reduction) strategies from a Bayesian decision-theoretic formulation of the problem of estimating a probability of failure. These sequential strategies use a Gaussian process model of $f$ and aim at performing evaluations of $f$ as efficiently as possible to infer the value of the probability of failure. We compare these strategies to other strategies also based on a Gaussian process model for estimating a probability of failure.
    Comment: This is an author-generated postprint version. The published version is available at http://www.springerlink.co
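    Under a GP model, one simple plug-in estimator consistent with this Bayesian formulation averages posterior exceedance probabilities over a Monte Carlo sample drawn from the known input distribution; the paper's SUR strategies go further by choosing where to evaluate $f$ so as to reduce the uncertainty of such an estimate. A sketch:

```python
import numpy as np
from scipy.stats import norm

def failure_probability_estimate(mu, sigma, threshold):
    """Plug-in estimate of alpha = P(f(X) > u) under a GP posterior.

    mu, sigma : GP posterior mean and standard deviation at points
                X_1, ..., X_n sampled from the known input distribution.
    Averages posterior exceedance probabilities instead of running the
    expensive simulator at each sampled input.
    """
    p_exceed = norm.sf((threshold - mu) / np.maximum(sigma, 1e-12))
    return p_exceed.mean()
```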

    On Bayesian Search for the Feasible Space Under Computationally Expensive Constraints

    We are often interested in identifying the feasible subset of a decision space under multiple constraints to permit effective design exploration. If determining feasibility requires computationally expensive simulations, the cost of exploration can be prohibitive. Bayesian search is data-efficient for such problems: starting from a small dataset, the central concept is to use Bayesian models of the constraints with an acquisition function to locate promising solutions that may improve predictions of feasibility when the dataset is augmented. At the end of this sequential active-learning approach with a limited number of expensive evaluations, the models can accurately predict the feasibility of any solution, obviating the need for full simulations. In this paper, we propose a novel acquisition function that combines the probability that a solution lies at the boundary between the feasible and infeasible spaces (representing exploitation) and the entropy in predictions (representing exploration). Experiments confirmed the efficacy of the proposed function.
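    The paper's exact criterion is not reproduced here, but a toy stand-in illustrates the boundary/entropy combination: given a model-predicted probability of feasibility for each candidate, score both closeness to the decision boundary and predictive entropy, which peak where the model is most unsure about feasibility.

```python
import numpy as np

def boundary_entropy_acquisition(p_feasible):
    """Toy acquisition combining boundary proximity and predictive entropy.

    p_feasible : predicted probability that each candidate satisfies all
                 constraints (e.g. from Gaussian process classifiers).
    This is an illustrative combination, not the authors' published form.
    """
    p = np.clip(p_feasible, 1e-12, 1.0 - 1e-12)
    boundary = 1.0 - np.abs(2.0 * p - 1.0)                 # exploitation term
    entropy = -(p * np.log(p) + (1.0 - p) * np.log1p(-p))  # exploration term
    return boundary * entropy
```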

    Quantifying the effects of climate change and water abstraction on a population of barramundi (Lates calcarifer), a diadromous estuarine finfish

    Many aquatic species are linked to environmental drivers such as temperature and salinity through processes such as spawning, recruitment and growth. Information is needed on how fished species may respond to altered environmental drivers under climate change so that adaptive management strategies can be developed. Barramundi (Lates calcarifer) is a highly prized species of the Indo-West Pacific whose recruitment and growth are driven by river discharge. We developed a monthly age- and length-structured population model for barramundi. Markov chain Monte Carlo simulations were used to explore the population's response to altered river discharges under modelled total licenced water abstraction and projected climate change, derived and downscaled from Global Climate Model A1FI. Mean values of exploitable biomass, annual catch, maximum sustainable yield and spawning stock size were significantly reduced under scenarios where river discharge was reduced, even after accounting for uncertainty. These results suggest that the upstream use of water resources and climate change have the potential to significantly reduce downstream barramundi stock sizes and harvests, and may undermine the inherent resilience of estuarine-dependent fisheries. © 2012 CSIRO

    Level set estimation with search space warping


    User preferences in Bayesian multi-objective optimization: the expected weighted hypervolume improvement criterion

    To be published in the proceedings of LOD 2018 – The Fourth International Conference on Machine Learning, Optimization, and Data Science – September 13-16, 2018 – Volterra, Tuscany, Italy.
    In this article, we present a framework for taking user preferences into account in multi-objective Bayesian optimization, in the case where the objectives are expensive-to-evaluate black-box functions. A novel expected improvement criterion to be used within Bayesian optimization algorithms is introduced. This criterion, which we call the expected weighted hypervolume improvement (EWHI) criterion, is a generalization of the popular expected hypervolume improvement to the case where the hypervolume of the dominated region is defined using an absolutely continuous measure instead of the Lebesgue measure. The EWHI criterion takes the form of an integral for which no closed-form expression exists in the general case. To deal with its computation, we propose an importance sampling approximation method. A sampling density that is optimal for the computation of the EWHI for a predefined set of points is crafted, and a sequential Monte Carlo (SMC) approach is used to obtain a sample approximately distributed from this density. The ability of the criterion to produce optimization strategies oriented by user preferences is demonstrated on a simple bi-objective test problem, in the cases of a preference for one objective and of a preference for certain regions of the Pareto front.
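    The generic estimator underlying the proposed approximation is plain importance sampling; the paper's contribution is to craft a near-optimal sampling density and to draw from it with SMC, neither of which is reproduced in this sketch:

```python
import numpy as np

def importance_sampling_estimate(h, target_density, draws, proposal_density):
    """Importance-sampling estimate of the integral of h(y) * pi(y) dy.

    The EWHI criterion is an integral of this form. `draws` are samples
    from any proposal density covering the support of the target measure;
    the paper instead uses an (approximately) optimal density obtained
    by sequential Monte Carlo.
    """
    w = target_density(draws) / proposal_density(draws)  # importance weights
    return np.mean(w * h(draws))
```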

    Locally Parallel Textures Modeling with Adapted Hilbert Spaces

    This article presents a new adaptive texture model. Locally parallel oscillating patterns are modeled with a weighted Hilbert space defined over local Fourier coefficients. The weights on the local Fourier atoms are optimized to match the local orientation and frequency of the texture. We propose an adaptive method to decompose an image into a cartoon layer and a locally parallel texture layer using this model and a total variation cartoon model. This decomposition method is then used to denoise an image containing oscillating patterns. Finally, we show how to take advantage of such a separation framework to simultaneously inpaint the structure and texture components of an image with missing parts. Numerical results show that our method improves on state-of-the-art algorithms for directional and complex textures.
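    As an illustration of the analysis step (not the paper's weight-optimization procedure), the local orientation and frequency that the adapted weights are designed to match can be estimated from a windowed FFT of each patch:

```python
import numpy as np

def dominant_local_frequency(patch):
    """Estimate the dominant orientation and frequency of a square patch.

    A sketch of the analysis behind adapted weights on local Fourier
    atoms: for a locally parallel texture, the weights should concentrate
    around the peak of the local spectrum.
    """
    n = patch.shape[0]
    window = np.hanning(n)[:, None] * np.hanning(n)[None, :]
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(patch * window)))
    spectrum[n // 2, n // 2] = 0.0            # ignore the DC component
    ky, kx = np.unravel_index(np.argmax(spectrum), spectrum.shape)
    fy, fx = ky - n // 2, kx - n // 2
    orientation = np.arctan2(fy, fx)          # radians
    frequency = np.hypot(fx, fy) / n          # cycles per pixel
    return orientation, frequency
```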